Localization of autonomous unmanned aerial vehicles (UAVs) relies heavily on Global Navigation Satellite Systems (GNSS), which are susceptible to interference. Especially in security applications, robust localization algorithms independent of GNSS are needed to provide dependable operation of autonomous UAVs even under interference. Typical non-GNSS visual localization approaches rely on a known starting pose, work only on small maps, or require the flight path to be known before a mission starts. We consider the problem of localization with no information on the initial pose or planned flight path. We propose a solution for global visual localization on a map at scales up to 100 km², based on matching orthoprojected UAV images to satellite imagery using learned season-invariant descriptors. We show that the method is able to determine the heading, latitude, and longitude of the UAV with 12.6-18.7 m lateral translation error in as few as 23.2-44.4 updates from an uninformed initialization, even in situations of significant seasonal appearance difference (winter-summer) between the UAV image and the map. We evaluate the characteristics of multiple neural network architectures for generating the descriptors, as well as likelihood estimation methods that provide fast convergence and low localization error. We also evaluate the operation of the algorithm on real UAV data and measure its running time on a real-time embedded platform. We believe this is the first work able to recover the pose of a UAV at this scale and rate of convergence while allowing significant seasonal difference between camera observations and the map.
Localization without Global Navigation Satellite Systems (GNSS) is a key capability for the autonomous operation of unmanned aerial vehicles (UAVs). Vision-based localization on a known map can be an effective solution, but it is burdened by two main problems: the appearance of a place varies with weather and season, and the perspective difference between UAV camera images and the map makes matching difficult. In this work, we propose a localization solution that relies on matching UAV camera images to a georeferenced orthophoto map with a trained convolutional neural network model that is invariant to the seasonal appearance difference (winter-summer) between the camera images and the map. We compare the convergence speed and localization accuracy of our solution to six reference methods. The results show significant improvement over the reference methods, especially under large seasonal variation. We finally demonstrate the method's ability to successfully localize a real UAV, showing that the proposed method is robust to perspective changes.
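As an illustration of the descriptor-matching localization described in these abstracts, below is a minimal sketch of a Bayesian measurement update over candidate map poses. It is not the authors' implementation: the map-tile descriptors are random placeholders standing in for the learned season-invariant CNN, and the cell count, descriptor dimension, and likelihood temperature are illustrative assumptions.

```python
"""Illustrative sketch: grid-based Bayesian localization by matching a
descriptor of an orthoprojected UAV image against descriptors of
geo-referenced map tiles. Descriptors here are random placeholders; in the
paper they come from a learned season-invariant CNN."""

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical map: a grid of candidate (lat, lon, heading) cells, each with
# a precomputed descriptor of the corresponding satellite-map tile.
N_CELLS = 10_000
DESC_DIM = 128
map_descriptors = rng.standard_normal((N_CELLS, DESC_DIM)).astype(np.float32)
map_descriptors /= np.linalg.norm(map_descriptors, axis=1, keepdims=True)

# Uniform ("uninformed") prior over all candidate poses.
belief = np.full(N_CELLS, 1.0 / N_CELLS)


def observation_likelihood(uav_desc: np.ndarray, temperature: float = 10.0) -> np.ndarray:
    """Turn descriptor similarity into a per-cell likelihood.

    Cosine similarity between the UAV-image descriptor and each map-tile
    descriptor is mapped through an exponential; the temperature is an
    assumed tuning parameter, not a value from the paper."""
    sims = map_descriptors @ uav_desc  # cosine similarity (unit-norm vectors)
    return np.exp(temperature * sims)


def bayes_update(belief: np.ndarray, uav_desc: np.ndarray) -> np.ndarray:
    """One measurement update: multiply the prior by the likelihood and renormalize."""
    posterior = belief * observation_likelihood(uav_desc)
    return posterior / posterior.sum()


# Simulated flight: the true pose is one grid cell; each UAV image yields a
# noisy descriptor near that cell's map descriptor.
true_cell = 1234
for step in range(30):
    uav_desc = map_descriptors[true_cell] + 0.5 * rng.standard_normal(DESC_DIM)
    uav_desc /= np.linalg.norm(uav_desc)
    belief = bayes_update(belief, uav_desc)

print("most likely cell:", int(np.argmax(belief)), "posterior mass:", float(belief.max()))
```

In the actual system, the descriptors would presumably be produced by the trained season-invariant network from the orthoprojected UAV image and the satellite-map tiles, with the belief over pose refined sequentially across the reported number of updates.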